
# RLHF reward modeling

FsfairX-LLaMA3-RM-v0.1
A reward model built on Meta-Llama-3-8B-Instruct for scoring responses in RLHF pipelines; it supports PPO, iterative SFT, and iterative DPO training methods.
Tags: Large Language Model, Transformers
Publisher: sfairXC
Downloads: 4,157 · Likes: 56
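Reward models like this one are typically trained with a pairwise Bradley-Terry objective: given a preferred and a rejected response, minimize the negative log-sigmoid of the score difference (the exact training recipe for this model is an assumption here; this is only a minimal sketch of the standard loss).

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    # Pairwise preference loss used to train reward models:
    # -log(sigmoid(r_chosen - r_rejected))
    diff = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# When the reward model scores the preferred response higher,
# the loss is small; when it ranks the pair the wrong way, it grows.
print(bradley_terry_loss(2.0, -1.0) < bradley_terry_loss(-1.0, 2.0))  # True
```

During PPO or iterative DPO, the trained model's scalar scores are then used to rank or weight candidate responses; the loss above is what shapes those scores during reward-model training.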
© 2025 AIbase